EM Demystified: An Expectation-Maximization Tutorial
Authors
Abstract
After a couple of disastrous experiments trying to teach EM, we carefully wrote this tutorial to give you an intuitive and mathematically rigorous understanding of EM and why it works. We explain the standard applications of EM to learning Gaussian mixture models (GMMs) and hidden Markov models (HMMs), and prepare you to apply EM to new problems. This tutorial assumes you have an advanced undergraduate understanding of probability and statistics.
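To make the GMM application concrete, here is a minimal sketch of plain EM for a one-dimensional, two-component Gaussian mixture. It is an illustrative sketch of the standard algorithm, not code from the paper; the function name em_gmm_1d, the synthetic data, and the initialization scheme are our own assumptions.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100, seed=0):
    """Fit a k-component 1-D Gaussian mixture to data x with plain EM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    weights = np.full(k, 1.0 / k)                 # mixing weights
    means = rng.choice(x, size=k, replace=False)  # initialize means from the data
    variances = np.full(k, np.var(x))
    for _ in range(n_iter):
        # E-step: responsibility resp[i, j] = P(component j | x_i, current parameters)
        dens = np.exp(-0.5 * (x[:, None] - means) ** 2 / variances) \
               / np.sqrt(2.0 * np.pi * variances)
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from the responsibilities
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, variances

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
    print(em_gmm_1d(data, k=2))
```

Run on the synthetic two-cluster data above, the recovered means and variances should land close to the generating parameters after a few dozen iterations.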
Similar resources
The Expectation Maximization Algorithm: A short tutorial
This tutorial discusses the Expectation Maximization (EM) algorithm of Dempster, Laird and Rubin [1]. The approach taken follows that of an unpublished note by Stuart Russell, but fleshes out some of the gory details. In order to ensure that the presentation is reasonably self-contained, some of the results on which the derivation of the algorithm is based are presented prior to the main results...
Estimating Gaussian Mixture Densities with EM – A Tutorial
Expectation Maximization (EM) [4, 3, 6] is a numerical algorithm for the maximization of functions of several variables. There are several tutorial introductions to EM, including [8, 5, 2, 7]. These are excellent references for a more general treatment of EM, good intuitions, and useful explanations. The purpose of this document is to explain in a more self-contained way how EM can solve a...
Expectation Maximization: a Gentle Introduction
This tutorial was written for students and researchers getting their first exposure to the Expectation Maximization (EM) algorithm. The main motivation for writing it was that I did not find any text that fitted my needs. I started with the great book “Artificial Intelligence: A Modern Approach” of Russell and Norvig [6], which provides lots of intuition, but I was...
Mixture Models and Expectation-Maximization
This tutorial attempts to provide a gentle introduction to EM by way of simple examples involving maximum-likelihood estimation of mixture-model parameters. Readers familiar with ML parameter estimation and clustering may want to skip directly to Sections 5.2 and 5.3.
Beyond EM: Bayesian Techniques for Human Language Technology Researchers
The Expectation-Maximization (EM) algorithm has proved to be a great and useful technique for unsupervised learning problems in natural language, but, unfortunately, its range of applications is largely limited by intractable E- or M-steps, and by its reliance on the maximum likelihood estimator. The natural language processing community typically resorts to ad-hoc approximation methods to get (some...
Publication date: 2010